The Future(s)

No obvious connection between these links, other than future(s). First up is The future will be about programming languages, an MP3 recording of a talk by Ted Neward:

During this [hour-long] Jazoon keynote, Ted Neward talks about why the next five years in IT will be about languages. Programming language virtualization, tools, linguistic focus, and expressiveness are different forces that are coming of age. Not to mention the impact of the over-used and over-hyped domain-specific languages.

Though I wanted to enjoy a talk that touches on the importance of PL diversity (with mentions of FP, AOP, Lisp, Ruby, Groovy, Intercal, and Ook), I didn't really learn much new. The audience is Java programmers, so perhaps the ideas are a tad revolutionary for that crowd. Still, it might be of interest to those who want to know how PL innovations might play out in Java land.

On a different note, one possible way to gather prognostications is to conduct a poll asking readers about the Top 10 programming languages of the future. But then you end up with answers that look disturbingly similar to what people are doing in the present. Hopefully Ehud's query will yield more intriguing results.

Finally, there are a couple of articles on futures as a PL feature for concurrent programming: Futures in Objective-C and Microsoft's planned library to Optimize Managed Code For Multi-Core Machines, which includes tasks and futures. Alice ML demonstrates that futures can be a powerful PL feature, but we run squarely into the question of whether providing a feature as a library really gives the same level of expressiveness as integrating it into the core of a language. Still, it's nice to see futures starting to come into wider play.
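As a concrete illustration of the library approach, here is what a future looks like in Python's concurrent.futures module (my own example, unrelated to the Objective-C and Microsoft libraries linked above): the caller gets a handle back immediately and blocks only when the value is actually needed.

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(n):
    # Stand-in for an expensive computation running on another thread.
    return n * n

with ThreadPoolExecutor(max_workers=2) as pool:
    fut = pool.submit(slow_square, 7)  # returns a future immediately
    # ... the caller is free to do other work here ...
    print(fut.result())                # blocks only at the point of use
```

Note that because this is "just a library", nothing stops you from accidentally calling `result()` too early and serializing everything; a language-integrated design like Alice ML's can make the synchronization implicit.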

Tim Bray and Erlang

Erlang is getting a lot of attention these days (LtU was there first, of course: we had in-depth discussions of Erlang several years ago). One recent discussion centers on Tim Bray's experiments with Erlang. Steve Vinoski provides timing results showing the effect of utilizing Erlang's concurrency features on Tim's challenge.

Fair cooperative multithreading, or: Typing termination in a higher-order concurrent imperative language

Fair cooperative multithreading, or: Typing termination in a higher-order concurrent imperative language. September 2007. A preliminary version (without proofs, recursion and region-scoping) appeared in the proceedings of CONCUR'07, LNCS 4703 (2007), 272-286.

We propose a new operational model for shared variable concurrency, in the context of a concurrent, higher-order imperative language à la ML. In our model the scheduling of threads is cooperative, and a non-terminating process suspends itself on each recursive call. A property to ensure in such a model is fairness, that is, any thread should yield the scheduler after some finite computation. To this end, we follow and adapt the classical method for proving termination in typed formalisms, namely the realizability technique. There is a specific difficulty with higher-order state, which is that one cannot define a realizability interpretation simply by induction on types, because applying a function may have side-effects at types not smaller than the type of the function. Moreover, such higher-order side-effects may give rise to computations that diverge without resorting to explicit recursion. We overcome these difficulties by introducing a type and effect system for our language that enforces a stratification of the memory. The stratification prevents the circularities in the memory that may cause divergence, and allows us to define a realizability interpretation of the types and effects, which we then use to prove the intended termination property. Our realizability interpretation also copes with dynamic thread creation.

Slides are available, not just the paper, and provide a nice introduction to approaches for proving termination, which may be useful in their own right to those not familiar with the topic.
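The cooperative model from the abstract (threads that suspend themselves at each step, with fairness meaning every thread yields after finite computation) can be sketched with Python generators. This is only an illustration of the scheduling discipline, not of the paper's typed formalism:

```python
from collections import deque

log = []

def counter(name, limit):
    # A cooperative "thread": it suspends itself at every step,
    # so no single thread can monopolize the scheduler.
    for i in range(limit):
        log.append(f"{name}{i}")
        yield  # suspension point, akin to yielding on each recursive call

def run(threads):
    # Round-robin cooperative scheduler: fairness holds because every
    # thread performs only finite work between yields.
    ready = deque(threads)
    while ready:
        t = ready.popleft()
        try:
            next(t)
            ready.append(t)  # thread yielded; requeue it
        except StopIteration:
            pass             # thread terminated

run([counter("A", 2), counter("B", 2)])
print(log)  # the two threads interleave fairly
```

The paper's contribution is precisely a type and effect system guaranteeing that every such "thread" really does reach its next `yield`; in the untyped sketch above, a thread with an infinite loop between yields would starve the others.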

Code Splitting for Network Bound Web 2.0 Applications

Code Splitting for Network Bound Web 2.0 Applications. Benjamin Livshits, Chen Ding. August 2007.

Modern Web 2.0 applications such as Gmail, Live Maps, MySpace, Flickr and many others have become a common part of everyday life. These applications are network bound, meaning that their performance can and does vary a great deal based on network conditions. However, there has not been much systematic research on trying to optimize network usage of these applications to make them more responsive for end-user interactions. Interestingly enough, code itself, usually in the form of JavaScript that runs within the client browser, constitutes a significant fraction of what needs to be transferred to the client for the application to run. Therefore, one way to significantly improve the perceived client-side performance for a range of Web 2.0 applications is to perform judicious decomposition — or splitting — of code and only transfer the code that is necessary just before it is needed. In this paper we propose several code splitting algorithms and explore their effectiveness at improving the responsiveness of large sophisticated Web 2.0 sites.

Once upon a time there was a lot of research on mobile code. It seemed like it was going nowhere. Then came JavaScript, AJAX, and Web 2.0, and in a worse-is-better fashion JavaScript became the mobile code platform of choice. Maybe now is a good time to resume research on the subject...
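The core idea of code splitting, shipping code only when it is first needed, can be mimicked in miniature with lazy loading. Here is a Python analogy of my own (the paper itself works by rewriting JavaScript): the "transfer" is deferred until an attribute of the module is actually touched.

```python
import importlib

class LazyModule:
    # Defers the import (the stand-in for "transferring code over the
    # network") until an attribute is first used.
    def __init__(self, name):
        self._name = name
        self._mod = None

    def __getattr__(self, attr):
        if self._mod is None:
            self._mod = importlib.import_module(self._name)  # "download" now
        return getattr(self._mod, attr)

lazy_json = LazyModule("json")            # nothing loaded yet
print(lazy_json.dumps({"lazy": True}))    # import happens here, on first use
```

The hard part the paper addresses is doing this splitting automatically and at a fine grain, so that the initial payload of a large JavaScript application shrinks without introducing stalls at every boundary.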

Binary Lambda Calculus and Combinatory Logic

While Anton was waxing about Church & Turing, I figured that Occam's Razor would be the type of proof one would postulate when giving the nod to Lambda Calculus over Universal Turing Machines. This leads inexorably to the question: what is the smallest (as measured in binary bits) universal machine that can possibly be constructed? John Tromp provides an answer to this question in his always fun Lambda Calculus and Combinatory Logic Playground:

Pictured you can see the 210 bit binary lambda calculus self-interpreter, and the 272 bit binary combinatory logic self-interpreter. Both are explained in detail in my latest paper available in PostScript and PDF. This design of a minimalistic universal computer was motivated by my desire to come up with a concrete definition of Kolmogorov Complexity, which studies randomness of individual objects. All ideas in the paper have been implemented in the wonderfully elegant Haskell language, which is basically pure typed lambda calculus with lots of syntactic sugar on top.

Interestingly, the version based on the Lambda Calculus is smaller than the one based on Combinators. A statement in the paper about PLs that I found of interest:

Although information content may seem to be highly dependent on choice of programming language, the notion is actually invariant up to an additive constant.

Not sure if that statement means that PL research is ultimately doomed. :-)
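For readers who haven't played with combinatory logic: the entire system runs on two rewrite rules, K x y → x and S x y z → x z (y z). Here is a toy normal-order reducer in Python (a sketch of the reduction rules only, nothing like Tromp's binary encoding), showing that S K K behaves as the identity:

```python
def norm(t):
    # Normal-order reduction of SK combinator terms.
    # Atoms are strings ("S", "K", or variables); application is a pair (f, a).
    if isinstance(t, str):
        return t
    f, a = norm(t[0]), t[1]
    if isinstance(f, tuple):
        g, b = f
        if g == "K":                              # (K b) a      ->  b
            return norm(b)
        if isinstance(g, tuple) and g[0] == "S":  # ((S x) b) a  ->  (x a) (b a)
            x = g[1]
            return norm(((x, a), (b, a)))
    return (f, norm(a))

I = (("S", "K"), "K")   # the identity combinator, built from S and K alone
print(norm((I, "x")))   # reduces all the way down to the bare variable
```

That two rules suffice for universal computation is what makes a 272-bit self-interpreter even conceivable.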

Online video course on monads

The n-Category Café gives links to an interesting series of online courses about category-theoretic monads and their link to algebras. I found them clear, precise, and very helpful, even if they are generally oriented more towards mathematicians than computer scientists.

  • Monads 1: Definition of monad. Some examples.
  • Monads 2: Examples continued. The monad for categories
  • Monads 3: Algebras for monads
  • Monads 3a: Answers to some questions
  • Monads 4: The category of algebras of a monad

Blackboards rule! :)
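For the computer scientists in the audience, the categorical unit and multiplication show up in programming as "return" and "bind". Here is a minimal Maybe monad sketch in Python (my own illustration, not from the lectures), together with a check of one of the monad laws:

```python
# A Maybe monad sketch: unit ("return") injects a value, bind sequences
# computations that may fail, with NOTHING propagated automatically.
NOTHING = ("Nothing",)

def unit(x):
    return ("Just", x)

def bind(m, f):
    # Short-circuit on failure; otherwise feed the wrapped value to f.
    return m if m == NOTHING else f(m[1])

def safe_div(x, y):
    return NOTHING if y == 0 else unit(x / y)

# Chaining two possibly-failing computations: the second one fails,
# and the failure propagates without any explicit error handling.
print(bind(bind(unit(10), lambda a: safe_div(a, 2)),
           lambda b: safe_div(b, 0)))

# One of the monad laws (left identity): bind(unit(x), f) == f(x)
assert bind(unit(3), lambda v: unit(v + 1)) == unit(4)
```

The lectures approach the same structure from the other direction, via the unit and multiplication natural transformations and their coherence laws.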

Minimal FORTH compiler and tutorial

Rich Jones writes: I wanted to understand how FORTH is implemented, so I implemented it and wrote a step-by-step tutorial on what every bit does.

The tutorial is inside a (literate) code file you can download and run.

I've been told recently by people I trust that it is about time I learned Forth. This may be just what I was waiting for...
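To get a taste of the execution model the tutorial builds up (a data stack plus a dictionary of words), here is a deliberately tiny Forth-style evaluator in Python. This is my own sketch of the surface semantics, nothing like the threaded-code implementation the tutorial actually constructs:

```python
def forth(src):
    # A minimal Forth-style interpreter: whitespace-separated words,
    # integers push themselves, and a small dictionary of primitive
    # words manipulates the data stack.
    stack, out = [], []
    words = {
        "+":    lambda: stack.append(stack.pop() + stack.pop()),
        "*":    lambda: stack.append(stack.pop() * stack.pop()),
        "dup":  lambda: stack.append(stack[-1]),
        "swap": lambda: stack.extend([stack.pop(), stack.pop()]),
        ".":    lambda: out.append(str(stack.pop())),
    }
    for w in src.split():
        if w in words:
            words[w]()          # execute a known word
        else:
            stack.append(int(w))  # anything else is a number literal
    return " ".join(out)

print(forth("2 3 + dup * ."))   # computes (2 + 3) squared
```

What makes real Forth interesting, and what the tutorial covers, is that the dictionary itself is user-extensible at runtime with `:` ... `;` definitions, so the language bootstraps itself from a handful of primitives.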

Compositional type systems for stack-based low-level languages

Compositional type systems for stack-based low-level languages
by Ando Saabas and Tarmo Uustalu, appearing in Proceedings of the 12th Computing: The Australasian Theory Symposium - 2006.

It is widely believed that low-level languages with jumps must be difficult to reason about by being inherently non-modular. We have recently argued that this is untrue and proposed a novel method for developing compositional natural semantics and Hoare logics for low-level languages and demonstrated its viability on the example of a simple low-level language with expressions (Saabas & Uustalu 2005). The central idea is to use the implicit structure of finite disjoint unions present in low-level code as an (ambiguous) phrase structure. Here we apply our method to a stack-based language and develop it further. We define a compositional natural semantics and Hoare logic for this language and then go on to show that, in addition to Hoare logics, one can also derive compositional type systems as weaker specification languages with the same method. We describe type systems for stack-error freedom and secure information flow.

I encountered the article while researching compositional type systems. I would like to hear people's thoughts about this article and compositional type systems in general (are they interesting, esoteric, mundane?).
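To make the stack-error-freedom property concrete: the simplest imaginable version of such a type system assigns each instruction a stack effect (values consumed, values produced) and rejects programs that could underflow. Here is a toy linear checker of my own devising; the paper's system is far richer, being compositional over arbitrary code fragments with jumps:

```python
# Each instruction's stack effect: (values consumed, values produced).
EFFECTS = {"push": (0, 1), "pop": (1, 0), "add": (2, 1), "dup": (1, 2)}

def check(code, depth=0):
    # Walks the instruction sequence, tracking stack depth; a well-typed
    # program never consumes more values than are available.
    for instr in code:
        consumed, produced = EFFECTS[instr]
        if depth < consumed:
            raise TypeError(f"stack underflow at '{instr}'")
        depth = depth - consumed + produced
    return depth

print(check(["push", "push", "add"]))  # well-typed: ends with depth 1
```

The difficulty the paper tackles is that real low-level code is not a straight-line sequence: with jumps, the "type" of a fragment must describe the stack at every labeled entry point, and fragments must compose.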

DivaScheme

Since it's still awfully quiet, I thought mentioning DivaScheme would be a good idea, seeing as IDEs always get the juices flowing...

When DivaScheme is on, the most important functions of DrScheme are available through unchorded keystrokes, and the motion commands operate on sexps by default. DivaScheme also takes care of layout concerns. It automatically maintains the white spaces, the indentation, and the balance of the parentheses.

This description does not do it justice. You can read the docs or see the movie (links on the DivaScheme site).

Efficient, Correct Simulation of Biological Processes in the Stochastic Pi-calculus

Efficient, Correct Simulation of Biological Processes in the Stochastic Pi-calculus. Andrew Phillips, Luca Cardelli. September 2007.

In recent years, there has been considerable research on designing programming languages for complex parallel computer systems. Interestingly, some of this research is also applicable to biological systems, which are typically highly complex and massively parallel. In particular, a mathematical programming language known as the stochastic Pi-calculus has recently been used to model and simulate a range of biological systems...

...the prototype simulator presented in this paper will form the basis of the next release of the Stochastic Pi Machine.

SPiM, The Stochastic Pi Machine, is here, where you'll find tutorials and related information.

A nice introduction to the Gillespie algorithm can be found here.
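For the curious, the Gillespie algorithm itself is short: given the current state, compute each reaction's propensity, sample an exponentially distributed waiting time from the total, and fire one reaction. Here is a sketch for the simplest possible system, a single decay reaction A → ∅, with illustrative parameters of my own choosing:

```python
import math
import random

def gillespie_decay(n, k, t_end, rng=None):
    # Gillespie's direct method for one reaction, A -> 0 at rate k.
    # Returns the trajectory as a list of (time, molecule count) pairs.
    rng = rng or random.Random(0)
    t, traj = 0.0, [(0.0, n)]
    while n > 0:
        a = k * n                              # total propensity
        t += -math.log(1.0 - rng.random()) / a # exponential waiting time
        if t > t_end:
            break
        n -= 1                                 # the (only) reaction fires
        traj.append((t, n))
    return traj

traj = gillespie_decay(n=100, k=0.1, t_end=50.0)
print(f"final count: {traj[-1][1]} at t = {traj[-1][0]:.2f}")
```

With many reaction channels the bookkeeping grows, and the paper's contribution is doing exactly that bookkeeping efficiently and provably correctly for processes expressed in the stochastic Pi-calculus.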